Multimodal Excitatory Interfaces with Automatic Content Classification

Authors

  • John Williamson
  • Roderick Murray-Smith
Abstract

We describe an excitatory interface for displaying data on mobile devices, based around active exploration: devices are shaken, revealing the contents rattling around inside. This combines sample-based contact sonification with event-playback vibrotactile feedback for a rich and compelling display. Motion is sensed from accelerometers, directly linking the motions of the user to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. This interaction style is applied to the display of an SMS inbox. We use language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of the contents of an inbox, without compromising privacy or interrupting the user.
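
To make the interaction loop concrete, here is a minimal Python sketch of the kind of system the abstract describes: accelerometer readings drive simulated balls (one per message) bouncing in a virtual box, and each wall impact triggers a contact sound and a synchronised vibrotactile pulse. The device hooks (read_accelerometer, play_impact_sample, pulse_vibration), the physics constants, and the timbre labels are illustrative stand-ins, not the authors' implementation.

```python
import random
from dataclasses import dataclass

# --- Hypothetical device hooks (stubs; a real app would bind these to
# --- platform accelerometer / audio / vibration APIs) ------------------

def read_accelerometer() -> float:
    """Stub: lateral acceleration in m/s^2 (simulated noise here)."""
    return random.gauss(0.0, 4.0)

def play_impact_sample(timbre: str, velocity: float) -> None:
    """Stub: trigger a contact sound; the timbre encodes the message class."""
    print(f"clink[{timbre}] vel={velocity:.2f}")

def pulse_vibration(strength: float) -> None:
    """Stub: fire a short vibrotactile event alongside the sound."""
    print(f"buzz strength={strength:.2f}")

@dataclass
class Ball:
    """One simulated object 'inside' the device, one per SMS message."""
    timbre: str   # chosen by the content classifier (e.g. 'wood' = personal)
    mass: float   # e.g. heavier balls for longer messages
    x: float = 0.0  # position within the virtual box [-1, 1]
    v: float = 0.0  # velocity

DT, DAMPING, WALL = 0.01, 0.995, 1.0

def step(ball: Ball, accel: float) -> None:
    """Euler-integrate one ball; on a wall hit, emit sound + vibration.

    Scaling the drive by mass makes heavy balls sluggish; this is an
    illustrative dynamic, not the paper's model.
    """
    ball.v = (ball.v + accel / ball.mass * DT) * DAMPING
    ball.x += ball.v * DT
    if abs(ball.x) > WALL:                # collision with the box wall
        ball.x = WALL if ball.x > 0 else -WALL
        ball.v = -0.8 * ball.v            # inelastic bounce
        play_impact_sample(ball.timbre, abs(ball.v))
        pulse_vibration(min(1.0, abs(ball.v)))

balls = [Ball(timbre="wood", mass=1.0), Ball(timbre="metal", mass=2.5)]
for _ in range(500):                      # the tightly closed sensing loop
    a = read_accelerometer()
    for b in balls:
        step(b, a)
```

Because impacts are discrete, quantal events, sample playback for sound and event playback for vibration fit naturally: nothing plays while the device is at rest, which is what makes the display reactive rather than disruptive.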


Similar articles

Emotional Facial Expression Classification for Multimodal User Interfaces

We present a simple and computationally feasible method to perform automatic emotional classification of facial expressions. We propose the use of 10 characteristic points (that are part of the MPEG-4 feature points) to extract relevant emotional information (basically five distances, presence of wrinkles and mouth shape). The method defines and detects the six basic emotions (plus the neutral o...
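
As an illustration of the distance-feature idea, the sketch below computes a handful of inter-point distances from 2-D facial landmarks and classifies by nearest per-emotion template. The point names, coordinates, pairings, template values, and decision rule are invented for the example; the paper's exact feature set may differ.

```python
import math

# Hypothetical subset of the 10 MPEG-4 feature points, as (x, y) pixels.
points = {
    "brow_l": (120, 80), "eye_l": (125, 110),
    "brow_r": (200, 82), "eye_r": (195, 111),
    "mouth_top": (160, 190), "mouth_bottom": (160, 215),
    "mouth_left": (135, 200), "mouth_right": (185, 200),
}

def dist(a: str, b: str) -> float:
    return math.hypot(points[a][0] - points[b][0],
                      points[a][1] - points[b][1])

# Five distance features of the kind the abstract mentions.
features = [
    dist("brow_l", "eye_l"),            # left eyebrow raise
    dist("brow_r", "eye_r"),            # right eyebrow raise
    dist("mouth_top", "mouth_bottom"),  # mouth opening
    dist("mouth_left", "mouth_right"),  # mouth width
    dist("eye_l", "mouth_left"),        # cheek-region stretch
]

# Illustrative per-emotion templates (in practice learned from labelled
# data); classification picks the nearest template in feature space.
templates = {
    "neutral":  [30, 29, 25, 50, 92],
    "surprise": [42, 41, 45, 46, 95],
    "joy":      [31, 30, 30, 62, 88],
}

def classify(f: list) -> str:
    return min(templates,
               key=lambda e: sum((a - b) ** 2 for a, b in zip(f, templates[e])))

print(classify(features))
```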


Individualizing the New Interfaces: Extraction of User's Emotions from Facial Data

When developing new multimodal user interfaces, emotional user information may be of great interest. In this paper we present a simple and computationally feasible method to perform automatic emotional classification of facial expressions. We propose the use of 10 characteristic points (that are part of the MPEG-4 feature points) to extract relevant emotional information (basically five distance...


Synthesis of Environmental Sounds in Interactive Multimodal Systems

This review paper discusses the literature on perception and synthesis of environmental sounds. Relevant studies in ecological acoustics and multimodal perception are reviewed, and physically-based sound synthesis techniques for various families of environmental sounds are compared. Current research directions and open issues, including multimodal interfaces and virtual environments, automatic r...
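
One standard physically-based technique for contact and impact sounds is modal synthesis, where a strike excites a bank of exponentially damped sinusoids. A minimal sketch follows; the modal frequencies, decay rates, and amplitudes are invented for illustration rather than derived from an object's geometry and material, as a full physical model would do.

```python
import math

SR = 44100  # sample rate in Hz

def modal_impact(duration: float = 0.5,
                 modes=((440.0, 8.0, 1.0),    # (freq Hz, decay 1/s, amplitude)
                        (1180.0, 14.0, 0.5),
                        (2650.0, 25.0, 0.25))) -> list:
    """Synthesise an impact as a sum of exponentially damped sinusoids."""
    n = int(duration * SR)
    return [
        sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
            for f, d, a in modes)
        for t in (i / SR for i in range(n))
    ]

samples = modal_impact()
print(len(samples), max(samples))
```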


Multimodal Communication from Multimodal Thinking - towards an Integrated Model of Speech and Gesture Production

A computational model for the automatic production of combined speech and iconic gesture is presented. The generation of multimodal behavior is grounded in processes of multimodal thinking, in which a propositional representation interacts and interfaces with an imagistic representation of visuo-spatial imagery. An integrated architecture for this is described, in which the planning of content ...


Designing personalised, multilingual speech and language based applications

In this paper, we outline the challenges facing interaction designers of next generation personalised, multilingual speech and language based interfaces. The ever-increasing amount of digital content requires the application of language technologies such as machine translation, text categorisation and automatic speech recognition and synthesis in order to make the content accessible to a wide r...
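
As a toy illustration of the text categorisation step named above, the sketch below fits one add-one-smoothed unigram language model per class and assigns a message to the class under which it is most probable. The corpus, class names, and messages are all fabricated for the example.

```python
import math
from collections import Counter

# Toy labelled corpus (invented); real systems train on far more data.
corpus = {
    "personal": ["see you at dinner tonight", "love you call me later"],
    "business": ["meeting moved to 3pm", "please review the attached invoice"],
}

# One unigram language model per class, with add-one smoothing.
counts = {c: Counter(w for doc in docs for w in doc.split())
          for c, docs in corpus.items()}
vocab = {w for cnt in counts.values() for w in cnt}

def log_prob(text: str, c: str) -> float:
    """Log-likelihood of the text under class c's smoothed unigram model."""
    total = sum(counts[c].values()) + len(vocab)
    return sum(math.log((counts[c][w] + 1) / total) for w in text.split())

def categorise(text: str) -> str:
    return max(corpus, key=lambda c: log_prob(text, c))

print(categorise("call me about dinner"))  # -> 'personal'
```

Class-conditional language models of this kind are attractive for on-device use because training and scoring reduce to word counting, which matters when the classifier must run on a phone.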



Publication date: 2010